I heard about the course from an email sent by Prof. Kimmo himself and decided to join. So thanks, Kimmo. I am excited to learn about all things Data Science.
I have used R and RStudio for about three years now. However, I have not yet tried out GitHub. I thought this would be a good chance to refresh my analytics skills.
Here is the link to my GitHub: Josephine’s GitHub.
As I am a regular R user, I guess things like
would be familiar. I was wondering if, during the course, we will also explore writing R packages.
This chapter focuses on performing and interpreting regression analysis.
The data are from an international survey of Approaches to Learning, made possible by Teachers’ Academy funding for KV in 2013–2015. The data have been filtered to include only the variables of interest for this analysis. The original data and variable descriptions can be found here.
data_analysis <- read.csv("learning2014.csv") # Read data from my local folder
str(data_analysis) # The data structure is a data frame.
## 'data.frame': 166 obs. of 7 variables:
## $ gender : chr "F" "M" "F" "M" ...
## $ Age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ attitude: num 3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ Points : int 25 12 24 10 22 21 21 31 24 26 ...
dim(data_analysis) # The data contains 166 observations or rows and 7 variables or columns.
## [1] 166 7
head(data_analysis, n = 3) # First three rows
## gender Age attitude deep stra surf Points
## 1 F 53 3.7 3.583333 3.375 2.583333 25
## 2 M 55 3.1 2.916667 2.750 3.166667 12
## 3 F 49 2.5 3.500000 3.625 2.250000 24
tail(data_analysis, n = 3) # Last three rows
## gender Age attitude deep stra surf Points
## 164 F 18 3.7 3.166667 2.625 3.416667 18
## 165 F 19 3.6 3.416667 2.625 3.000000 30
## 166 M 21 1.8 4.083333 3.375 2.666667 19
library(GGally) # Access the GGally library
library(ggplot2) # Access the ggplot2 library
# create plot matrix with ggpairs() by gender.
ggpairs(data_analysis, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))
Interpretations:
summary(data_analysis) # summary of the variables
## gender Age attitude deep
## Length:166 Min. :17.00 Min. :1.400 Min. :1.583
## Class :character 1st Qu.:21.00 1st Qu.:2.600 1st Qu.:3.333
## Mode :character Median :22.00 Median :3.200 Median :3.667
## Mean :25.51 Mean :3.143 Mean :3.680
## 3rd Qu.:27.00 3rd Qu.:3.700 3rd Qu.:4.083
## Max. :55.00 Max. :5.000 Max. :4.917
## stra surf Points
## Min. :1.250 Min. :1.583 Min. : 7.00
## 1st Qu.:2.625 1st Qu.:2.417 1st Qu.:19.00
## Median :3.188 Median :2.833 Median :23.00
## Mean :3.121 Mean :2.787 Mean :22.72
## 3rd Qu.:3.625 3rd Qu.:3.167 3rd Qu.:27.75
## Max. :5.000 Max. :4.333 Max. :33.00
Interpretations:
For the regression model, the three chosen explanatory variables are attitude, stra, and surf. The choice is based on the correlation analysis conducted in the step above: as can be observed, all three variables are correlated with the response variable Points.
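As a quick check (a minimal sketch using the data frame loaded above), the pairwise correlations of the candidate variables with Points can be printed directly:

# correlations between the candidate explanatory variables and Points
cor(data_analysis[, c("attitude", "stra", "surf", "Points")])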
# Fit a multiple linear regression model using the lm() function
model1 <- lm(Points ~ attitude + stra + surf,
data = data_analysis)
# Summary of the fitted model
summary(model1)
##
## Call:
## lm(formula = Points ~ attitude + stra + surf, data = data_analysis)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.1550 -3.4346 0.5156 3.6401 10.8952
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.0171 3.6837 2.991 0.00322 **
## attitude 3.3952 0.5741 5.913 1.93e-08 ***
## stra 0.8531 0.5416 1.575 0.11716
## surf -0.5861 0.8014 -0.731 0.46563
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared: 0.2074, Adjusted R-squared: 0.1927
## F-statistic: 14.13 on 3 and 162 DF, p-value: 3.156e-08
Interpretations: Neither stra nor surf is statistically significant, as their p-values are higher than 0.05, the commonly used 5% significance level. Let us remove these variables one by one, starting with surf, and refit the model.
# Fit a new multiple linear regression model without the surf variable
model2 <- lm(Points ~ attitude + stra,
data = data_analysis)
# Summary of the new fitted model
summary(model2)
##
## Call:
## lm(formula = Points ~ attitude + stra, data = data_analysis)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.6436 -3.3113 0.5575 3.7928 10.9295
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 8.9729 2.3959 3.745 0.00025 ***
## attitude 3.4658 0.5652 6.132 6.31e-09 ***
## stra 0.9137 0.5345 1.709 0.08927 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.289 on 163 degrees of freedom
## Multiple R-squared: 0.2048, Adjusted R-squared: 0.1951
## F-statistic: 20.99 on 2 and 163 DF, p-value: 7.734e-09
Interpretations: In the newly fitted model, the remaining variables seem to be statistically significant, at least at the 10% significance level in the case of the stra variable.
In linear regression, the interpretation of the model depends on the functional form used. In this case, the functional form is linear-linear (level-level), meaning the data were not log-transformed.
Therefore, the interpretation goes as follows: "a one-unit increase in x (the explanatory variable) results in a Beta_1 (the estimated coefficient) unit increase in y (the response variable)". Additionally, since this is a multiple linear regression, each variable is interpreted one at a time while holding the other factors fixed.
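As a concrete example (a minimal sketch based on model2 fitted above), the attitude coefficient can be read off and interpreted this way:

# holding stra fixed, a one-unit increase in attitude is associated with an
# increase of about 3.5 exam points (the estimated coefficient of attitude)
coef(model2)["attitude"]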
Interpretations:
The R-squared is used to quantify how well the model fits the data. In simple linear regression, it is the ratio of the explained sum of squares to the total sum of squares. Because R-squared can only increase when more explanatory variables are added, in multiple linear regression it is recommended to assess the model fit with the adjusted R-squared, which takes the number of fitted parameters into account.
Interpretation: In the case of model2, the adjusted R-squared is 0.1951, meaning the model explains about 19.5% of the variation in exam points. The higher the adjusted R-squared, the better the model fits the data.
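To make the definitions concrete, here is a small sketch (assuming model2 and data_analysis as above) that reproduces the reported R-squared and adjusted R-squared from the residuals:

y   <- data_analysis$Points
rss <- sum(residuals(model2)^2)                 # residual sum of squares
tss <- sum((y - mean(y))^2)                     # total sum of squares
n   <- length(y)                                # number of observations
p   <- 2                                        # number of explanatory variables
r2  <- 1 - rss / tss                            # multiple R-squared
adj_r2 <- 1 - (1 - r2) * (n - 1) / (n - p - 1)  # adjusted R-squared
c(R_squared = r2, adjusted_R_squared = adj_r2)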
# Plot the diagnostics plots: Residuals vs Fitted values, Normal QQ-plot and Residuals vs Leverage
par(mfrow = c(2,2)) # Divide the plotting window into a 2-by-2 grid of sub-windows.
plot(model2, which = c(1,2,5))
Interpretation (Normal Q-Q plot): most of the residual points seem to lie on the line; the normality assumption is justified.
Interpretation (Residuals vs Fitted): the residuals seem to be randomly scattered with respect to the fitted values; no pattern is observed, so the constant-variance assumption is justified.
Interpretation (Residuals vs Leverage): no single highly influential outlier is observed.
This chapter focuses on performing and interpreting logistic regression analysis.
The data are from two identical questionnaires related to secondary school student alcohol consumption in Portugal. Data source: UCI Machine Learning Repository. Metadata available here.
data_joined <- read.csv("pormath.csv") # Read data from my local folder
str(data_joined) # The data structure is a data frame.
## 'data.frame': 370 obs. of 35 variables:
## $ school : chr "GP" "GP" "GP" "GP" ...
## $ sex : chr "F" "F" "F" "F" ...
## $ age : int 15 15 15 15 15 15 15 15 15 15 ...
## $ address : chr "R" "R" "R" "R" ...
## $ famsize : chr "GT3" "GT3" "GT3" "GT3" ...
## $ Pstatus : chr "T" "T" "T" "T" ...
## $ Medu : int 1 1 2 2 3 3 3 2 3 3 ...
## $ Fedu : int 1 1 2 4 3 4 4 2 1 3 ...
## $ Mjob : chr "at_home" "other" "at_home" "services" ...
## $ Fjob : chr "other" "other" "other" "health" ...
## $ reason : chr "home" "reputation" "reputation" "course" ...
## $ nursery : chr "yes" "no" "yes" "yes" ...
## $ internet : chr "yes" "yes" "no" "yes" ...
## $ alc_use : num 1 3 1 1 2.5 1 2 2 2.5 1 ...
## $ high_use : logi FALSE TRUE FALSE FALSE TRUE FALSE ...
## $ guardian : chr "mother" "mother" "mother" "mother" ...
## $ traveltime: int 2 1 1 1 2 1 2 2 2 1 ...
## $ studytime : int 4 2 1 3 3 3 3 2 4 4 ...
## $ schoolsup : chr "yes" "yes" "yes" "yes" ...
## $ famsup : chr "yes" "yes" "yes" "yes" ...
## $ activities: chr "yes" "no" "yes" "yes" ...
## $ higher : chr "yes" "yes" "yes" "yes" ...
## $ romantic : chr "no" "yes" "no" "no" ...
## $ famrel : int 3 3 4 4 4 4 4 4 4 4 ...
## $ freetime : int 1 3 3 3 2 3 2 1 4 3 ...
## $ health : int 1 5 2 5 3 5 5 4 3 4 ...
## $ goout : int 2 4 1 2 1 2 2 3 2 3 ...
## $ Walc : int 1 4 1 1 3 1 2 3 3 1 ...
## $ Dalc : int 1 2 1 1 2 1 2 1 2 1 ...
## $ failures : int 0 1 0 0 1 0 1 0 0 0 ...
## $ paid : chr "yes" "no" "no" "no" ...
## $ absences : int 3 2 8 2 5 2 0 1 9 10 ...
## $ G1 : int 10 10 14 10 12 12 11 10 16 10 ...
## $ G2 : int 12 8 13 10 12 12 6 10 16 10 ...
## $ G3 : int 12 8 12 9 12 12 6 10 16 10 ...
dim(data_joined) # The data contains 370 observations or rows and 35 variables or columns.
## [1] 370 35
colnames(data_joined) # variables names
## [1] "school" "sex" "age" "address" "famsize"
## [6] "Pstatus" "Medu" "Fedu" "Mjob" "Fjob"
## [11] "reason" "nursery" "internet" "alc_use" "high_use"
## [16] "guardian" "traveltime" "studytime" "schoolsup" "famsup"
## [21] "activities" "higher" "romantic" "famrel" "freetime"
## [26] "health" "goout" "Walc" "Dalc" "failures"
## [31] "paid" "absences" "G1" "G2" "G3"
The chosen variables are: student grades (G3), sex, absences, and failures.
The hypotheses are:
library(dplyr) # Access the dplyr library
library(ggplot2) # Access the ggplot2 library
data_joined %>% # produce summary statistics by group
group_by(sex, high_use) %>%
summarise(count = n(), mean_grade = mean(G3))
## # A tibble: 4 x 4
## # Groups: sex [2]
## sex high_use count mean_grade
## <chr> <lgl> <int> <dbl>
## 1 F FALSE 154 11.4
## 2 F TRUE 41 11.8
## 3 M FALSE 105 12.3
## 4 M TRUE 70 10.3
ggplot(data_joined, aes(x = high_use, y = G3, col = sex)) + geom_boxplot() + ylab("grade") + ggtitle("Student grades by alcohol consumption and sex")
Comments:
ggplot(data_joined, aes(x = high_use, y = absences, col = sex)) + geom_boxplot() + ggtitle("Student absences by alcohol consumption and sex")
Comments:
ggplot(data_joined, aes(failures, fill = high_use)) + geom_bar(position = "dodge") + ggtitle("Student failures by alcohol consumption")
Comments:
This analysis explores the relationship between the four chosen variables and the binary high/low alcohol consumption variable (high_use) as the target.
# Fit a logistic regression model using the glm() function
glm_model1 <- glm(high_use ~ G3 + failures + absences + sex, data = data_joined, family = "binomial")
# Summary of the fitted model
summary(glm_model1)
##
## Call:
## glm(formula = high_use ~ G3 + failures + absences + sex, family = "binomial",
## data = data_joined)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.1561 -0.8429 -0.5872 1.0033 2.1393
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.38733 0.51617 -2.688 0.00719 **
## G3 -0.04671 0.03948 -1.183 0.23671
## failures 0.50382 0.22018 2.288 0.02213 *
## absences 0.09058 0.02322 3.901 9.56e-05 ***
## sexM 1.00870 0.24798 4.068 4.75e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 452.04 on 369 degrees of freedom
## Residual deviance: 405.59 on 365 degrees of freedom
## AIC: 415.59
##
## Number of Fisher Scoring iterations: 4
# Coefficients of the model
coef(glm_model1)
## (Intercept) G3 failures absences sexM
## -1.38732604 -0.04670941 0.50382370 0.09057516 1.00870144
Interpretations:
Let us refit the model without the G3 variable and without the intercept, so that the coefficients for both sexes are shown directly.
# Fit a new logistic regression model
glm_model2 <- glm(high_use ~ failures + absences + sex - 1, data = data_joined, family = "binomial")
# Summary of the fitted model
summary(glm_model2)
##
## Call:
## glm(formula = high_use ~ failures + absences + sex - 1, family = "binomial",
## data = data_joined)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.1550 -0.8430 -0.5889 1.0328 2.0374
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## failures 0.59759 0.20698 2.887 0.00389 **
## absences 0.09245 0.02323 3.979 6.91e-05 ***
## sexF -1.94150 0.23129 -8.394 < 2e-16 ***
## sexM -0.94418 0.19305 -4.891 1.00e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 512.93 on 370 degrees of freedom
## Residual deviance: 406.99 on 366 degrees of freedom
## AIC: 414.99
##
## Number of Fisher Scoring iterations: 4
# Coefficients of the model
coef(glm_model2)
## failures absences sexF sexM
## 0.59759293 0.09245138 -1.94149584 -0.94418265
# compute odds ratios (OR)
OR <- coef(glm_model2) %>% exp
# compute confidence intervals (CI)
CI <- confint(glm_model2) %>% exp
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
## OR 2.5 % 97.5 %
## failures 1.8177381 1.21997630 2.7642305
## absences 1.0968598 1.05041937 1.1508026
## sexF 0.1434892 0.08940913 0.2219242
## sexM 0.3889974 0.26411123 0.5638009
Interpretations:
# predict() the probability of high_use
probabilities <- predict(glm_model2, type = "response")
# add the predicted probabilities to 'data_joined'
data_joined <- mutate(data_joined, probability = probabilities)
# use the probabilities to make a prediction of high_use
data_joined <- mutate(data_joined, prediction = probability > 0.5)
# see the first ten original classes, predicted probabilities, and class predictions
select(data_joined, failures, absences, sex, high_use, probability, prediction) %>% head(10)
## failures absences sex high_use probability prediction
## 1 0 3 F FALSE 0.1592068 FALSE
## 2 1 2 F TRUE 0.2388490 FALSE
## 3 0 8 F FALSE 0.2311401 FALSE
## 4 0 2 F FALSE 0.1472175 FALSE
## 5 1 5 F TRUE 0.2928368 FALSE
## 6 0 2 F FALSE 0.1472175 FALSE
## 7 1 0 F FALSE 0.2068690 FALSE
## 8 0 1 F FALSE 0.1359851 FALSE
## 9 0 9 F TRUE 0.2479765 FALSE
## 10 0 10 F FALSE 0.2656157 FALSE
# tabulate the target variable versus the predictions
table(high_use = data_joined$high_use,
prediction = data_joined$prediction)
## prediction
## high_use FALSE TRUE
## FALSE 252 7
## TRUE 78 33
Comments:
Let us check the accuracy and the loss function of the model.
# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = data_joined$high_use, prob = data_joined$probability)
## [1] 0.2297297
Comments: The mean of incorrectly classified observations can be thought of as a penalty (loss) function for the classifier: the smaller the penalty, the better. The aim is to minimize the number of incorrectly classified observations. model2 has a mean (training) prediction error of about 23%.
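For reference (a one-line sketch using the prediction column created above), the training accuracy is simply the complement of this loss:

# share of correctly classified observations (about 0.77 = 1 - 0.23)
mean(data_joined$high_use == data_joined$prediction)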
library(boot)
cv <- cv.glm(data = data_joined, cost = loss_func, glmfit = glm_model2, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2378378
Comments: Yes, model2 has a slightly smaller cross-validated prediction error (about 0.24) than the model introduced in DataCamp (about 0.26).
This chapter focuses on performing and interpreting clustering and classification on the Boston data set.
The data are for the housing values in suburbs of Boston. The data are available from the MASS package and the variable descriptions can be found here.
library(MASS)
data("Boston") # load the data
str(Boston) # A data frame
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506 14
Comment: The data frame has 506 rows and 14 columns. All variables are numeric; chas is the Charles River dummy variable (= 1 if the tract bounds the river; 0 otherwise).
# plot matrix of the variables
pairs(Boston,
col = "blue", # Change color
pch = 18, # Change shape of points
main = "Matrix plot of the variables") # Add a main title
library(corrplot) # access the corrplot library
# compute the correlation matrix (needed for the plot below)
cor_matrix <- cor(Boston)
# visualize the upper correlation matrix
corrplot(cor_matrix, method = "circle", type = "upper")
Interpretations:
There seems to be a positive correlation between per capita crime rate by town (crim) and the index of accessibility to radial highways (rad) and also full-value property-tax rate per $10,000 (tax).
A slightly positive correlation between per capita crime rate by town (crim) and the proportion of non-retail business acres per town (indus), nitrogen oxides concentration (parts per 10 million) (nox), and lower status of the population (percent) (lstat).
A positive correlation between the proportion of residential land zoned for lots over 25,000 square feet (zn) and the weighted mean of distances to five Boston employment centres (dis).
A positive correlation between the proportion of non-retail business acres per town (indus) and nitrogen oxides concentration (parts per 10 million) (nox), proportion of owner-occupied units built prior to 1940 (age), index of accessibility to radial highways (rad), full-value property-tax rate per $10,000 (tax), and lower status of the population (percent) (lstat).
A positive correlation between average number of rooms per dwelling (rm) and median value of owner-occupied homes in $1000s (medv).
A negative correlation between: lower status of the population (percent) (lstat) and median value of owner-occupied homes in $1000s (medv).
Moreover, three variables are negatively correlated with the weighted mean of distances to five Boston employment centres (dis). Those are proportion of owner-occupied units built prior to 1940 (age), nitrogen oxides concentration (parts per 10 million) (nox), and proportion of non-retail business acres per town (indus).
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08205 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
Interpretations: On average,
# center and standardize variables
boston_scaled <- scale(Boston)
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)
# summaries of the scaled variables
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
How did the variables change? The variables have been rescaled to have a mean of zero and a standard deviation of one. For a standardized variable, each case’s value indicates its difference from the mean of the original variable, measured in standard deviations (of the original variable).
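A minimal sketch of what scale() actually does, using the crim column as an example (and assuming the objects created above):

# standardize one variable by hand: subtract the mean, divide by the standard deviation
crim_manual <- (Boston$crim - mean(Boston$crim)) / sd(Boston$crim)
all.equal(crim_manual, boston_scaled$crim)  # TRUE: identical to the scale() result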
# Create a categorical variable of the crime rate
# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
# look at the table of the new factor crime
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
# Drop the old crime rate variable from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)
# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)
# Divide the dataset to train and test sets
# number of rows in the Boston dataset
n <- nrow(boston_scaled)
# choose randomly 80% of the rows
ind <- sample(n, size = n * 0.8)
# create train set
train <- boston_scaled[ind,]
# create test set
test <- boston_scaled[-ind,]
# save the correct classes from test data
correct_classes <- test$crime
# remove the crime variable from test data
test <- dplyr::select(test, -crime)
# linear discriminant analysis
lda.fit <- lda(crime ~ ., data = train)
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
# target classes as numeric
classes <- as.numeric(train$crime)
# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1.3)
# Save the crime categories from the test set and then remove the categorical crime variable from the test dataset. DONE -- See above steps.
# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)
# cross tabulate the results
tbl_lda <- table(correct = correct_classes, predicted = lda.pred$class)
tbl_lda; rowSums(tbl_lda)
## predicted
## correct low med_low med_high high
## low 10 12 2 0
## med_low 5 18 2 0
## med_high 2 9 17 0
## high 0 0 0 25
## low med_low med_high high
## 24 25 28 25
Comments: From the table, we see that the LDA predicts correctly:
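One way to quantify this (a small sketch based on the cross-tabulation computed above; the exact value varies with the random train/test split) is the overall classification accuracy, i.e. the share of test observations on the diagonal of the table:

# overall prediction accuracy on the test set (about 0.69 for the table above)
sum(diag(tbl_lda)) / sum(tbl_lda)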
data("Boston") # Reload
Re_data <- scale(Boston) # standardize it
# distances between the observations
dist_eu <- dist(Re_data)
# k-means clustering with 3 clusters
km <- kmeans(Re_data, centers = 3)
# investigate what is the optimal number of clusters
set.seed(123) # set the seed
# determine the number of clusters
k_max <- 10
# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(Re_data, k)$tot.withinss})
# visualize the results
library(ggplot2)
qplot(x = 1:k_max, y = twcss, geom = 'line')
Comment: The “scree plot” above helps to identify an appropriate number of clusters. The “elbow shape” suggests that two clusters (k = 2) are a good candidate, since the total WCSS drops radically when moving from one to two clusters.
# run the algorithm again
# k-means clustering
km_new <- kmeans(Re_data, centers = 2)
# plot the Re_scale Boston dataset with 2 clusters
pairs(Re_data, col = km_new$cluster)
table(km_new$cluster)
##
## 1 2
## 329 177
Comment: With k = 2, cluster 1 contains 329 of the 506 observations and cluster 2 contains the remaining 177. The clusters are also separated by colour across the predictors in the pairs plot. Some variables show a clear split between the two clusters, while for others the clusters are quite mixed.
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404 13
dim(lda.fit$scaling)
## [1] 13 3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
library(plotly) # access plotly library
# 3D plot of the columns of the matrix
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', surfacecolor = train$crime)
# another 3D plot with color defined by the clusters of the k-means
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', surfacecolor = km_new$cluster)
How do the plots differ? Are there any similarities? The two plots look relatively similar, with a clear cut between clusters, and the number of observations within each group seems to be about the same.
This chapter focuses on dimensionality reduction techniques such as principal component analysis (PCA) and multiple correspondence analysis (MCA).
human <- read.csv("human_data") # Read the data from my local file
dim(human)
## [1] 155 9
str(human)
## 'data.frame': 155 obs. of 9 variables:
## $ X : chr "Norway" "Australia" "Switzerland" "Denmark" ...
## $ F_education : num 97.4 94.3 95 95.5 87.7 96.3 80.5 95.1 100 95 ...
## $ ratio_labour : num 0.891 0.819 0.825 0.884 0.829 ...
## $ Life_Expectancy : num 81.6 82.4 83 80.2 81.6 80.9 80.9 79.1 82 81.8 ...
## $ Years_Education : num 17.5 20.2 15.8 18.7 17.9 16.5 18.6 16.5 15.9 19.2 ...
## $ GNI_per_capita : int 64992 42261 56431 44025 45435 43919 39568 52947 42155 32689 ...
## $ Maternal_mortality: int 4 6 6 5 6 7 9 28 11 8 ...
## $ Adolescent_birth : num 7.8 12.1 1.9 5.1 6.2 3.8 8.2 31 14.5 25.3 ...
## $ In_parliament : num 39.6 30.5 28.5 38 36.9 36.9 19.9 19.4 28.2 31.4 ...
library(GGally) # Access the GGally library
library(dplyr) # Access the dplyr library
library(corrplot) # Access the corrplot library
# Remove the country name column (stored as X)
human_new <- dplyr::select(human, -X)
# visualize the 'human' variables
ggpairs(human_new, mapping = aes(alpha = 0.3))
# compute the correlation matrix and visualize it with corrplot
cor(human_new)%>%
corrplot(type = "upper")
Interpretations:
summary(human_new) # summary of variables
## F_education ratio_labour Life_Expectancy Years_Education
## Min. : 0.90 Min. :0.1857 Min. :49.00 Min. : 5.40
## 1st Qu.: 27.15 1st Qu.:0.5984 1st Qu.:66.30 1st Qu.:11.25
## Median : 56.60 Median :0.7535 Median :74.20 Median :13.50
## Mean : 55.37 Mean :0.7074 Mean :71.65 Mean :13.18
## 3rd Qu.: 85.15 3rd Qu.:0.8535 3rd Qu.:77.25 3rd Qu.:15.20
## Max. :100.00 Max. :1.0380 Max. :83.50 Max. :20.20
## GNI_per_capita Maternal_mortality Adolescent_birth In_parliament
## Min. : 581 Min. : 1.0 Min. : 0.60 Min. : 0.00
## 1st Qu.: 4198 1st Qu.: 11.5 1st Qu.: 12.65 1st Qu.:12.40
## Median : 12040 Median : 49.0 Median : 33.60 Median :19.30
## Mean : 17628 Mean : 149.1 Mean : 47.16 Mean :20.91
## 3rd Qu.: 24512 3rd Qu.: 190.0 3rd Qu.: 71.95 3rd Qu.:27.95
## Max. :123124 Max. :1100.0 Max. :204.80 Max. :57.50
Interpretations: On average,
# perform principal component analysis
pca_human_new <- prcomp(human_new)
summary(pca_human_new)
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8
## Standard deviation 1.854e+04 186.1920 25.97 19.25 11.42 3.723 1.431 0.1649
## Proportion of Variance 9.999e-01 0.0001 0.00 0.00 0.00 0.000 0.000 0.0000
## Cumulative Proportion 9.999e-01 1.0000 1.00 1.00 1.00 1.000 1.000 1.0000
# draw a biplot of the principal component
biplot(pca_human_new, choices = 1:2, col = c("blue", "red"))
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
Figure 1: PCA – Non-standardized data
# standardize the variables
human_std <- scale(human_new)
# perform principal component analysis
pca_human_std <- prcomp(human_std)
summary(pca_human_std)
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7
## Standard deviation 2.1194 1.1478 0.89070 0.73763 0.55201 0.48552 0.44894
## Proportion of Variance 0.5615 0.1647 0.09917 0.06801 0.03809 0.02947 0.02519
## Cumulative Proportion 0.5615 0.7261 0.82532 0.89333 0.93142 0.96089 0.98608
## PC8
## Standard deviation 0.33372
## Proportion of Variance 0.01392
## Cumulative Proportion 1.00000
# draw a biplot of the principal component
biplot(pca_human_std, choices = 1:2, col = c("grey40", "deeppink2"))
Figure 2: PCA – Standardized data
Interpretations:
With and without standardizing, are the results different? Yes, the results are very different with and without standardizing.
How and why? In Figure 1 (PCA on non-standardized data) we cannot really grasp the variability captured by the principal components. This is because PCA is sensitive to the relative scaling of the original features and treats features with larger variance as more important than features with smaller variance. That is probably why GNI_per_capita has a much longer arrow, and why over 99% of the variation is explained by the first principal component alone.
Figure 2, on the other hand, shows why standardizing the features before PCA is a crucial step. PCA decomposes the data into a product of smaller components and reveals the most important features. Here the first seven principal components explain about 98.6% of the variation, as presented in the cumulative proportion, and the first principal component, which captures the largest amount of variance in the original features, accounts for about 56%.
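The reported proportions of variance can be reproduced directly from the component standard deviations returned by prcomp() (a minimal sketch, assuming pca_human_std as above):

# variance of each principal component and its share of the total variance
pca_var <- pca_human_std$sdev^2
round(pca_var / sum(pca_var), 3)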
From the biplot, we can observe the following connections:
The angle between arrows reflects the correlation between the features: a small angle means a high positive correlation. There is, for instance, a high positive correlation between ratio_labour and In_parliament, and between Maternal_mortality and Adolescent_birth.
The angle between a feature and a PC axis reflects the correlation between the two: a small angle means a high positive correlation. For instance, the following variables are positively correlated with PC1: Maternal_mortality, Adolescent_birth, Years_Education, GNI_per_capita, and F_education.
The length of each arrow is proportional to the standard deviation of the feature. All variables seem to have roughly the same standard deviation, except In_parliament, which has a shorter arrow.
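These angle-based readings can be checked numerically by correlating the standardized variables with the first two component scores (a small sketch using the objects created above):

# correlations between the standardized features and the first two principal components
round(cor(human_std, pca_human_std$x[, 1:2]), 2)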
library(FactoMineR)
library(ggplot2)
library(tidyr)
data(tea)
dim(tea)
## [1] 300 36
str(tea)
## 'data.frame': 300 obs. of 36 variables:
## $ breakfast : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
## $ tea.time : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
## $ evening : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
## $ lunch : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
## $ dinner : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
## $ always : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
## $ home : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
## $ work : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
## $ tearoom : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
## $ friends : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
## $ resto : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
## $ pub : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
## $ Tea : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
## $ How : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
## $ sugar : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
## $ how : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ where : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ price : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
## $ age : int 39 45 47 23 48 21 37 36 40 37 ...
## $ sex : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
## $ SPC : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
## $ Sport : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
## $ age_Q : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
## $ frequency : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
## $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
## $ spirituality : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
## $ healthy : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
## $ diuretic : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
## $ friendliness : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
## $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ feminine : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
## $ sophisticated : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
## $ slimming : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ exciting : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
## $ relaxing : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
## $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
# select the 'keep_columns' to create a new dataset to visualize
# column names to keep in the dataset
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")
tea_time <- select(tea, one_of(keep_columns))
# visualize the dataset
gather(tea_time) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))
## Warning: attributes are not identical across measure variables;
## they will be dropped
# Multiple Correspondence Analysis to a certain columns of the data
mca <- MCA(tea_time, graph = FALSE)
# summary of the model
summary(mca)
##
## Call:
## MCA(X = tea_time, graph = FALSE)
##
##
## Eigenvalues
## Dim.1 Dim.2 Dim.3 Dim.4 Dim.5 Dim.6 Dim.7
## Variance 0.279 0.261 0.219 0.189 0.177 0.156 0.144
## % of var. 15.238 14.232 11.964 10.333 9.667 8.519 7.841
## Cumulative % of var. 15.238 29.471 41.435 51.768 61.434 69.953 77.794
## Dim.8 Dim.9 Dim.10 Dim.11
## Variance 0.141 0.117 0.087 0.062
## % of var. 7.705 6.392 4.724 3.385
## Cumulative % of var. 85.500 91.891 96.615 100.000
##
## Individuals (the 10 first)
## Dim.1 ctr cos2 Dim.2 ctr cos2 Dim.3
## 1 | -0.298 0.106 0.086 | -0.328 0.137 0.105 | -0.327
## 2 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 3 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 4 | -0.530 0.335 0.460 | -0.318 0.129 0.166 | 0.211
## 5 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 6 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 7 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 8 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 9 | 0.143 0.024 0.012 | 0.871 0.969 0.435 | -0.067
## 10 | 0.476 0.271 0.140 | 0.687 0.604 0.291 | -0.650
## ctr cos2
## 1 0.163 0.104 |
## 2 0.735 0.314 |
## 3 0.062 0.069 |
## 4 0.068 0.073 |
## 5 0.062 0.069 |
## 6 0.062 0.069 |
## 7 0.062 0.069 |
## 8 0.735 0.314 |
## 9 0.007 0.003 |
## 10 0.643 0.261 |
##
## Categories (the 10 first)
## Dim.1 ctr cos2 v.test Dim.2 ctr cos2
## black | 0.473 3.288 0.073 4.677 | 0.094 0.139 0.003
## Earl Grey | -0.264 2.680 0.126 -6.137 | 0.123 0.626 0.027
## green | 0.486 1.547 0.029 2.952 | -0.933 6.111 0.107
## alone | -0.018 0.012 0.001 -0.418 | -0.262 2.841 0.127
## lemon | 0.669 2.938 0.055 4.068 | 0.531 1.979 0.035
## milk | -0.337 1.420 0.030 -3.002 | 0.272 0.990 0.020
## other | 0.288 0.148 0.003 0.876 | 1.820 6.347 0.102
## tea bag | -0.608 12.499 0.483 -12.023 | -0.351 4.459 0.161
## tea bag+unpackaged | 0.350 2.289 0.056 4.088 | 1.024 20.968 0.478
## unpackaged | 1.958 27.432 0.523 12.499 | -1.015 7.898 0.141
## v.test Dim.3 ctr cos2 v.test
## black 0.929 | -1.081 21.888 0.382 -10.692 |
## Earl Grey 2.867 | 0.433 9.160 0.338 10.053 |
## green -5.669 | -0.108 0.098 0.001 -0.659 |
## alone -6.164 | -0.113 0.627 0.024 -2.655 |
## lemon 3.226 | 1.329 14.771 0.218 8.081 |
## milk 2.422 | 0.013 0.003 0.000 0.116 |
## other 5.534 | -2.524 14.526 0.197 -7.676 |
## tea bag -6.941 | -0.065 0.183 0.006 -1.287 |
## tea bag+unpackaged 11.956 | 0.019 0.009 0.000 0.226 |
## unpackaged -6.482 | 0.257 0.602 0.009 1.640 |
##
## Categorical variables (eta2)
## Dim.1 Dim.2 Dim.3
## Tea | 0.126 0.108 0.410 |
## How | 0.076 0.190 0.394 |
## how | 0.708 0.522 0.010 |
## sugar | 0.065 0.001 0.336 |
## where | 0.702 0.681 0.055 |
## lunch | 0.000 0.064 0.111 |
We will use the factoextra R package to help in the interpretation and the visualization of the multiple correspondence analysis.
These are the variances and the percentages of variance retained by each dimension. The proportions of variance retained by the different dimensions (axes) can be extracted using the function get_eigenvalue() as follows:
library(factoextra) # Access the factoextra library
eig_val <- get_eigenvalue(mca)
head(eig_val)
## eigenvalue variance.percent cumulative.variance.percent
## Dim.1 0.2793712 15.238428 15.23843
## Dim.2 0.2609265 14.232352 29.47078
## Dim.3 0.2193358 11.963768 41.43455
## Dim.4 0.1894379 10.332978 51.76753
## Dim.5 0.1772231 9.666715 61.43424
## Dim.6 0.1561774 8.518770 69.95301
To visualize the percentages of inertia explained by each MCA dimension, use the function fviz_eig() or fviz_screeplot().
fviz_screeplot(mca, addlabels = TRUE, ylim = c(0, 45))
The plot below shows a global pattern within the data. The function fviz_mca_biplot() can also be used to draw the biplot of individuals and variable categories. Here, we use the standard plot() function.
# visualize MCA
plot(mca, invisible=c("ind"), habillage = "quali")
Comments: The distance between variable categories gives a measure of their similarity. For example, tea bag and chain store are more similar than black and lemon, and green is different from all the other categories.
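As mentioned above, the factoextra function fviz_mca_biplot() can draw individuals and variable categories in the same plot; a minimal sketch:

# biplot of individuals and variable categories with factoextra
fviz_mca_biplot(mca,
                repel = TRUE,           # avoid overlapping labels (slow)
                ggtheme = theme_minimal())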
To visualize the correlation between variables and MCA principal dimensions, we use:
fviz_mca_var(mca, choice = "mca.cor",
repel = TRUE, # Avoid text overlapping (slow)
ggtheme = theme_minimal())
Comments: